18 research outputs found

    Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

    During laparoscopic liver resection, the limited access to the organ, the small field of view and the lack of palpation can obstruct a surgeon’s workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help surgeons find their targets (tumors) and risk structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ’s current intraoperative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ’s intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the estimation of biomechanical behavior and allows the networks to tackle the full non-rigid registration problem in a single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature-extraction capabilities of deep neural networks. To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The pipeline is designed to be applicable to real surgery, where everything must be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding the setup of an external tracking system), various neural networks are used to quickly interpret the scene, and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data.
The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.

Table of contents:
1 Introduction: 1.1 Motivation (1.1.1 Navigated Liver Surgery; 1.1.2 Laparoscopic Liver Registration); 1.2 Challenges in Laparoscopic Liver Registration (1.2.1 Preoperative Model; 1.2.2 Intraoperative Data; 1.2.3 Fusion/Registration; 1.2.4 Data); 1.3 Scope and Goals of this Work (1.3.1 Data-Driven, Biomechanical Model; 1.3.2 Data-Driven Non-Rigid Registration; 1.3.3 Building a Working Prototype)
2 State of the Art: 2.1 Rigid Registration; 2.2 Non-Rigid Liver Registration; 2.3 Neural Networks for Simulation and Registration
3 Theoretical Background: 3.1 Liver; 3.2 Laparoscopic Liver Resection (3.2.1 Staging Procedure); 3.3 Biomechanical Simulation (3.3.1 Physical Balance Principles; 3.3.2 Material Models; 3.3.3 Numerical Solver: The Finite Element Method (FEM); 3.3.4 The Lagrangian Specification); 3.4 Variables and Data in Liver Registration (3.4.1 Observable; 3.4.2 Unknowns)
4 Generating Simulations of Deforming Organs: 4.1 Organ Volume; 4.2 Forces and Boundary Conditions (4.2.1 Surface Forces; 4.2.2 Zero-Displacement Boundary Conditions; 4.2.3 Surrounding Tissues and Ligaments; 4.2.4 Gravity; 4.2.5 Pressure); 4.3 Simulation (4.3.1 Static Simulation; 4.3.2 Dynamic Simulation); 4.4 Surface Extraction (4.4.1 Partial Surface Extraction; 4.4.2 Surface Noise; 4.4.3 Partial Surface Displacement); 4.5 Voxelization (4.5.1 Voxelizing the Liver Geometry; 4.5.2 Voxelizing the Displacement Field; 4.5.3 Voxelizing Boundary Conditions); 4.6 Pruning Dataset - Removing Unwanted Results; 4.7 Data Augmentation
5 Deep Neural Networks for Biomechanical Simulation: 5.1 Training Data; 5.2 Network Architecture; 5.3 Loss Functions and Training
6 Deep Neural Networks for Non-Rigid Registration: 6.1 Training Data; 6.2 Architecture; 6.3 Loss; 6.4 Training; 6.5 Mesh Deformation; 6.6 Example Application
7 Intraoperative Prototype: 7.1 Image Acquisition; 7.2 Stereo Calibration; 7.3 Image Rectification, Disparity- and Depth-Estimation; 7.4 Liver Segmentation (7.4.1 Synthetic Image Generation; 7.4.2 Automatic Segmentation; 7.4.3 Manual Segmentation Modifier); 7.5 SLAM; 7.6 Dense Reconstruction; 7.7 Rigid Registration; 7.8 Non-Rigid Registration; 7.9 Rendering; 7.10 Robotic Operating System
8 Evaluation: 8.1 Evaluation Datasets (8.1.1 In-Silico; 8.1.2 Phantom Torso and Liver; 8.1.3 In-Vivo, Human, Breathing Motion; 8.1.4 In-Vivo, Human, Laparoscopy); 8.2 Metrics (8.2.1 Mean Displacement Error; 8.2.2 Target Registration Error (TRE); 8.2.3 Chamfer Distance; 8.2.4 Volumetric Change); 8.3 Evaluation of the Synthetic Training Data; 8.4 Data-Driven Biomechanical Model (DDBM) (8.4.1 Amount of Intraoperative Surface; 8.4.2 Dynamic Simulation); 8.5 Volume to Surface Registration Network (V2S-Net) (8.5.1 Amount of Intraoperative Surface; 8.5.2 Dependency on Initial Rigid Alignment; 8.5.3 Registration Accuracy in Comparison to Surface Noise; 8.5.4 Registration Accuracy in Comparison to Material Stiffness; 8.5.5 Chamfer Distance vs. Mean Displacement Error; 8.5.6 In-Vivo, Human Breathing Motion); 8.6 Full Intraoperative Pipeline (8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map; 8.6.2 Full Pipeline on Laparoscopic Human Data); 8.7 Timing
9 Discussion: 9.1 Intraoperative Model; 9.2 Physical Accuracy; 9.3 Limitations in Training Data; 9.4 Limitations Caused by Difference in Pre- and Intraoperative Modalities; 9.5 Ambiguity; 9.6 Intraoperative Prototype
10 Conclusion
11 List of Publications
List of Figures
Bibliography
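As a minimal, illustrative sketch of the central idea above, the snippet below shows a 3D encoder-decoder CNN that takes a voxelized preoperative organ mask plus a voxelized partial intraoperative surface and predicts a dense per-voxel displacement field, supervised by synthetic biomechanical simulations. The layer sizes and the class name DisplacementNet3D are assumptions for illustration, not the thesis's actual network (e.g., its V2S-Net).

```python
# Hedged sketch: 3D CNN mapping (preoperative mask, partial intraoperative surface)
# voxel grids to a dense displacement field. Architecture details are illustrative.
import torch
import torch.nn as nn

class DisplacementNet3D(nn.Module):
    def __init__(self, in_channels=2, base=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, base, 3, padding=1), nn.ReLU(),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(base, 3, 3, padding=1),   # 3 output channels: (dx, dy, dz) per voxel
        )

    def forward(self, preop_mask, intraop_surface):
        # Stack the two voxel grids as input channels: (B, 2, D, H, W)
        x = torch.cat([preop_mask, intraop_surface], dim=1)
        return self.decoder(self.encoder(x))    # (B, 3, D, H, W) displacement field

# Training sketch: supervise against displacement fields from synthetic simulations.
model = DisplacementNet3D()
preop = torch.rand(1, 1, 64, 64, 64)            # voxelized preoperative liver (dummy data)
surface = torch.rand(1, 1, 64, 64, 64)          # voxelized partial intraoperative surface
target = torch.rand(1, 3, 64, 64, 64)           # ground-truth displacement from simulation
loss = nn.functional.mse_loss(model(preop, surface), target)
```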

    Exploring Semantic Consistency in Unpaired Image Translation to Generate Data for Surgical Applications

    In surgical computer vision applications, obtaining labeled training data is challenging due to data-privacy concerns and the need for expert annotation. Unpaired image-to-image translation techniques have been explored to automatically generate large annotated datasets by translating synthetic images to the realistic domain. However, preserving the structure and semantic consistency between the input and translated images presents significant challenges, particularly when there is a distributional mismatch in the semantic characteristics of the domains. This study empirically investigates unpaired image translation methods for generating suitable data in surgical applications, with an explicit focus on semantic consistency. We extensively evaluate various state-of-the-art image translation models on two challenging surgical datasets and downstream semantic segmentation tasks. We find that a simple combination of structural-similarity loss and contrastive learning yields the most promising results. Quantitatively, we show that the data generated with this approach yields higher semantic consistency and can be used more effectively as training data.
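As a rough sketch of what "structural-similarity loss plus contrastive learning" can look like in practice, the snippet below combines a windowed SSIM term with a patch-wise InfoNCE term. The function names, the placeholder generator and feature encoder in the usage comment, and the loss weights are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: structural-similarity term + patch-wise contrastive (InfoNCE) term
# for unpaired image translation training. Names and weights are illustrative.
import torch
import torch.nn.functional as F

def ssim_loss(x, y, window_size=11, c1=0.01**2, c2=0.03**2):
    """1 - mean SSIM between two image batches in [0, 1], shape (B, C, H, W)."""
    pad = window_size // 2
    mu_x = F.avg_pool2d(x, window_size, 1, pad)
    mu_y = F.avg_pool2d(y, window_size, 1, pad)
    sigma_x = F.avg_pool2d(x * x, window_size, 1, pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window_size, 1, pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window_size, 1, pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return 1.0 - ssim.clamp(0, 1).mean()

def patch_nce_loss(feat_src, feat_tgt, temperature=0.07):
    """InfoNCE over corresponding feature patches: same location = positive pair.
    feat_src, feat_tgt: (N_patches, D) L2-normalized feature vectors."""
    logits = feat_src @ feat_tgt.t() / temperature            # (N, N) similarity matrix
    labels = torch.arange(feat_src.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Usage sketch (hypothetical models): translated = generator(synthetic_batch)
# loss = adversarial_term + lambda_ssim * ssim_loss(synthetic_batch, translated) \
#        + lambda_nce * patch_nce_loss(feats_synthetic, feats_translated)
```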

    Data-driven Intra-operative Estimation of Anatomical Attachments for Autonomous Tissue Dissection

    The execution of surgical tasks by an Autonomous Robotic System (ARS) requires an up-to-date model of the current surgical environment, which has to be deduced from measurements collected during task execution. In this work, we propose to automate tissue dissection tasks by introducing a convolutional neural network, called BA-Net, to predict the location of attachment points between adjacent tissues. BA-Net identifies the attachment areas from a single partial view of the deformed surface, without any a-priori knowledge about their location. The proposed method guarantees a very fast prediction time, which makes it ideal for intra-operative applications. Experimental validation is carried out on both simulated and real-world phantom data of soft tissue manipulation performed with the da Vinci Research Kit (dVRK). The obtained results demonstrate that BA-Net provides robust predictions at varying geometric configurations, material properties, distributions of attachment points and grasping point locations. The estimation of attachment points provided by BA-Net improves the simulation of the anatomical environment in which the system is acting, leading to a median simulation error below 5 mm in all the tested conditions. BA-Net can thus further support an ARS by providing a more robust test bench for robotic actions intra-operatively, in particular when replanning is needed. The method and collected dataset are available at https://gitlab.com/altairLab/banet
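A hedged sketch of the underlying idea (predicting per-point attachment probabilities from a partial view of a deformed surface) is given below, using a small PointNet-style network. This is an illustrative stand-in under assumed input shapes, not the authors' BA-Net architecture or training procedure.

```python
# Hedged sketch: label each point of a partial deformed-surface point cloud with an
# attachment probability. Illustrative stand-in, not the published BA-Net.
import torch
import torch.nn as nn

class AttachmentPointNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 1),                       # per-point attachment logit
        )

    def forward(self, points):                            # points: (B, N, 3)
        per_point = self.point_mlp(points)                # (B, N, F) local features
        global_feat = per_point.max(dim=1, keepdim=True).values   # (B, 1, F) shape context
        fused = torch.cat([per_point, global_feat.expand_as(per_point)], dim=-1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)        # (B, N) probabilities

# Usage sketch: threshold the probabilities to obtain candidate attachment areas.
cloud = torch.rand(1, 2048, 3)                            # partial view of the deformed surface
attachment_prob = AttachmentPointNet()(cloud)             # (1, 2048)
```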

    Intra-operative Update of Boundary Conditions for Patient-specific Surgical Simulation

    Patient-specific Biomechanical Models (PBMs) can enhance computer-assisted surgical procedures with critical information. Although pre-operative data allow such PBMs to be parametrized according to each patient's properties, they cannot fully characterize them. In particular, simulation boundary conditions cannot be determined from pre-operative modalities, yet their correct definition is essential to improve the predictive capability of a PBM. In this work, we introduce a pipeline that provides an up-to-date estimate of boundary conditions, starting from the pre-operative model of patient anatomy and the displacements undergone by points visible from an intra-operative vision sensor. The pipeline is experimentally validated in realistic conditions on ex vivo pararenal fat tissue manipulation. We demonstrate its capability to update a PBM to clinically acceptable performance, in terms of both accuracy and intra-operative time constraints.
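The general recipe behind such a pipeline can be sketched as an inverse problem: treat the unknown boundary-condition parameters as optimization variables and fit them so that a forward simulation reproduces the displacements observed at the visible points. The snippet below illustrates this with a toy linear forward_model standing in for a real finite-element simulation; the variable names and the three-region stiffness parametrization are assumptions, not the paper's implementation.

```python
# Hedged sketch: fit unknown boundary-condition parameters (attachment stiffnesses per
# region) so a forward simulation matches displacements observed on visible points.
# The linear forward_model is a toy placeholder for a real FE simulation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_visible, n_regions = 50, 3
influence = rng.normal(size=(n_visible, n_regions))       # toy sensitivity matrix

def forward_model(stiffness):
    """Toy stand-in: visible-point displacements as a function of region stiffnesses."""
    return influence @ (1.0 / (1.0 + stiffness))

true_stiffness = np.array([0.5, 2.0, 8.0])
observed = forward_model(true_stiffness)                   # e.g. from an intra-operative sensor

def objective(stiffness):
    residual = forward_model(stiffness) - observed
    return float(np.sum(residual ** 2))

result = minimize(objective, x0=np.ones(n_regions), bounds=[(1e-3, 1e3)] * n_regions)
print("estimated stiffnesses:", result.x)                  # should approach true_stiffness
```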

    IMHOTEP: cross-professional evaluation of a three-dimensional virtual reality system for interactive surgical operation planning, tumor board discussion and immersive training for complex liver surgery in a head-mounted display

    Background: Virtual reality (VR) with head-mounted displays (HMD) may improve medical training and patient care by improving the display and integration of different types of information. The aim of this study was to evaluate, among different healthcare professions, the potential of an interactive and immersive VR environment for liver surgery that integrates all relevant patient data from different sources needed for planning and training of procedures.
    Methods: 3D models of the liver, other abdominal organs, vessels, and tumors of a sample patient with multiple hepatic masses were created. The 3D models, clinical patient data, and other imaging data were visualized in a dedicated VR environment with an HMD (IMHOTEP). Users could interact with the data using head movements and a computer mouse. Structures of interest could be selected and viewed individually or grouped. IMHOTEP was evaluated in the context of preoperative planning and training of liver surgery and for the potential of broader surgical application. A standardized questionnaire was voluntarily answered by four groups (students, nurses, resident and attending surgeons).
    Results: In the evaluation by 158 participants (57 medical students, 35 resident surgeons, 13 attending surgeons and 53 nurses), 89.9% found the VR system agreeable to work with. Participants generally agreed that complex cases in particular could be assessed better (94.3%) and faster (84.8%) with VR than with traditional 2D display methods. The highest potential was seen in student training (87.3%), resident training (84.6%), and clinical routine use (80.3%). The least potential was seen in nursing training (54.8%).
    Conclusions: The present study demonstrates that using VR with HMD to integrate all available patient data for the preoperative planning of hepatic resections is a viable concept. VR with HMD promises great potential to improve medical training and operation planning and thereby to achieve improvements in patient care.

    Generating large labeled data sets for laparoscopic image processing tasks using unpaired image-to-image translation

    In the medical domain, the lack of large training data sets and benchmarks is often a limiting factor for training deep neural networks. In contrast to expensive manual labeling, computer simulations can generate large and fully labeled data sets with a minimum of manual effort. However, models that are trained on simulated data usually do not translate well to real scenarios. To bridge the domain gap between simulated and real laparoscopic images, we exploit recent advances in unpaired image-to-image translation. We extend an image-to-image translation method to generate a diverse multitude of realistic-looking synthetic images based on images from a simple laparoscopy simulation. By incorporating means to ensure that the image content is preserved during the translation process, we ensure that the labels given for the simulated images remain valid for their realistic-looking translations. This way, we are able to generate a large, fully labeled synthetic data set of laparoscopic images with realistic appearance. We show that this data set can be used to train models for the task of liver segmentation of laparoscopic images. We achieve average Dice scores of up to 0.89 in some patients without manually labeling a single laparoscopic image and show that using our synthetic data to pre-train models can greatly improve their performance. The synthetic data set will be made publicly available, fully labeled with segmentation maps, depth maps, normal maps, and positions of tools and camera (http://opencas.dkfz.de/image2image).
    Comment: Accepted at MICCAI 201
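For reference, the Dice score reported above can be computed as sketched below; this is a generic implementation of the metric with made-up masks in the usage example, and is independent of the paper's segmentation models.

```python
# Hedged sketch: Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Usage sketch: 1.0 means perfect overlap; the abstract reports averages up to 0.89.
pred = np.zeros((256, 256), dtype=bool); pred[50:200, 60:180] = True
gt = np.zeros((256, 256), dtype=bool);   gt[60:210, 70:190] = True
print(f"Dice: {dice_score(pred, gt):.3f}")
```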